Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prisha Khichadi
DOI Link: https://doi.org/10.22214/ijraset.2023.52267
This paper analyzes current usage, suggests related applications, and points out possible improvements in the field of artificial intelligence in space exploration. There are 5 sections, each devoted to a particular AI-oriented area of space exploration, with most of the arguments centred around data analysis. The first section inspects navigation in the booster (stage 1) landing system of the Falcon 9 and its development with the help of an algorithm that has immense potential to be applied elsewhere. The second section spotlights the Earth surface Mineral dust source InvesTigation (EMIT) project and acknowledges its cloud-obstruction limitation. The third section presents an automated exoplanet detection algorithm and suggests improved instrumentation for better results. The fourth section covers the data calibration and classification pipeline of the James Webb Space Telescope and its highly advanced Morpheus model. The fifth section focuses on autonomous mission operations in Martian rovers, with special emphasis on the latest rovers, Curiosity and Perseverance, and the improvements made in them.
I. INTRODUCTION
Artificial intelligence is used in multiple spheres of space exploration. However, it is still not developed enough to be used independently or extensively, and it still relies heavily on human support. Additionally, where it has been implemented, it is largely human driven rather than machine learning or deep learning oriented. The long-term goal of the industry is increased usage of AI. It is especially needed in cases that involve huge data sets, on-the-spot decision making and complex repetitive algorithms. Thus, the integration of advanced AI in space operations must be sped up. Recognising its present condition in its main applications, along with acknowledging its shortcomings, is just the first step to promote its advancement. The paper aims to address common problems faced in space operations, such as difficulty with landing precision. AI was first implemented in space operations 25 years ago, in 1998, when the Deep Space 1 probe conducted flybys of the comet Borrelly and the asteroid 9969 Braille. The system used was called the Remote Agent. Needless to say, we have come a long way since then but still have a long way to go.
II. ANALYZING APPLICATIONS
A. Falcon 9 booster landing systems
Examining the design of the Falcon 9, the rocket separates into 2 stages in the upper atmosphere of the Earth. The second stage carries the payload into space, whereas the first stage returns to Earth to be reused. It automatically aligns itself to maintain its dictated path based upon its current position and lands vertically on the landing pad, readjusting itself if it deviates from the flight path.
After all 9 Merlin engines are cut off, stage separation occurs, followed by SES1 (second engine start). After SES1, the rocket booster (stage 1) performs a boost-back burn to return to Earth. The boost-back burn kills the forward velocity attained and flips the booster to facilitate landing; a simple deceleration won't suffice, as the rocket must land upright. During the re-entry burn, the engines light up again to slow down the booster, and the landing burn reduces its velocity to zero. During this entire process, Falcon 9's booster analyzes its velocity, position and surroundings to adjust its trajectory toward the drone ship using the INS (inertial navigation system) and GPS (global positioning system). The booster automatically re-aligns itself if the pre-dictated flight path isn't maintained. SpaceX uses a convex optimisation algorithm, a technique closely tied to machine learning, for this process.
From Figure 1, we can infer that the algorithm has been modified after several iterations to improve the success rate of automated booster landings.
Over the course of many years, using an iterative design process, SpaceX has improved the accuracy of booster landings by a vast margin, both through better engineering of the rockets and through accumulated experience. SpaceX spent multiple years in a trial-and-error phase, beginning with a prototype take-off and landing vehicle called Grasshopper, which completed 8 successful test flights. A major issue was precision, which was harder to achieve when the flight path went out over the sea so that the booster could land on a drone ship instead of a landing pad. However, drone-ship landing is still preferred, as it requires relatively less fuel.
Some key technologies that helped achieve this precision, and that have the potential to further increase the success rate of booster landings, include the INS and the GPS. The INS uses sensors to measure the position, orientation, trajectory and velocity of the vehicle; it is composed of at least 3 gyroscopes and 3 accelerometers to obtain navigation information. The GPS provides the vehicle's geolocation. The two work simultaneously to maintain the pre-programmed flight path and make alterations based on current conditions if required. The rocket is adjusted using analysis done by the convex optimisation algorithm, as briefly mentioned above.
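To make the INS/GPS interplay concrete, the sketch below shows, in a heavily simplified form, how dead-reckoned inertial estimates drift and how periodic absolute fixes correct them. It is a toy illustration with invented numbers, not SpaceX's fusion software, which would use a proper estimator (for example a Kalman filter) rather than a fixed blend.

```python
# A minimal sketch (not flight software) of fusing INS dead reckoning with GPS
# fixes: the INS integrates acceleration at a high rate and drifts, while each
# GPS fix pulls the estimate back toward an absolute position. All values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.1, 200
true_pos = np.zeros(3)
true_vel = np.array([0.0, 0.0, -50.0])
est_pos, est_vel = true_pos.copy(), true_vel.copy()
accel = np.array([0.0, 0.0, 3.0])            # net (thrust minus gravity) acceleration

for step in range(steps):
    # Truth propagation (what the vehicle actually does).
    true_vel = true_vel + accel * dt
    true_pos = true_pos + true_vel * dt

    # INS propagation using a slightly biased accelerometer reading -> drift.
    measured_accel = accel + np.array([0.02, -0.01, 0.03])
    est_vel = est_vel + measured_accel * dt
    est_pos = est_pos + est_vel * dt

    # Every second, blend in a noisy GPS fix to correct the accumulated drift.
    if step % 10 == 0:
        gps_fix = true_pos + rng.normal(0, 2.0, 3)
        est_pos = 0.8 * est_pos + 0.2 * gps_fix

print("position error after fusion (m):", np.linalg.norm(est_pos - true_pos))
```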
The convex optimisation algorithm uses logic and mathematics to make suitable decisions for the trajectory of the booster. Stanford computer scientists devised this approach to increase precision in specifying the target location, improve efficiency and predictability, and make changes in the flight path by recalculating the booster trajectory as it reacts to its environment. It generates custom flight code and turns a mathematical problem into a high-speed solver. As most of the optimisation work is done offline, the online solution runs extremely fast. This incorporation of convex optimisation in space technology is of great importance because convex optimisation is an important part of ML, and many real-life problems can today be modelled as convex optimisation problems.
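As an illustration of what "turning a mathematical problem into a high-speed solver" looks like, the following sketch poses a simplified powered-descent problem as a convex program using the open-source cvxpy library. The dynamics, mass, thrust limit and initial state are all illustrative assumptions; the actual Falcon 9 guidance formulation is far more detailed (variable mass, glide-slope and attitude constraints, and so on).

```python
# A minimal sketch of convex trajectory optimization for a powered descent, in
# the spirit of the approach described in the text; it is not SpaceX's guidance
# code, and every number below is an illustrative assumption.
import cvxpy as cp
import numpy as np

N, dt = 60, 0.5                          # time steps and step length (s)
g = np.array([0.0, 0.0, -9.81])          # gravity (m/s^2)
mass = 20000.0                           # assumed constant mass (kg)
T_max = 400000.0                         # maximum thrust (N), illustrative

r = cp.Variable((N + 1, 3))              # position (m)
v = cp.Variable((N + 1, 3))              # velocity (m/s)
u = cp.Variable((N, 3))                  # commanded thrust acceleration (m/s^2)

constraints = [
    r[0, :] == np.array([300.0, 100.0, 1000.0]),   # initial offset from the pad
    v[0, :] == np.array([-20.0, 5.0, -60.0]),      # initial velocity
    r[N, :] == np.zeros(3),                        # end exactly on the pad
    v[N, :] == np.zeros(3),                        # end at rest (vertical landing)
]
for k in range(N):
    constraints += [
        v[k + 1, :] == v[k, :] + dt * (u[k, :] + g),   # velocity update
        r[k + 1, :] == r[k, :] + dt * v[k, :],         # position update
        cp.norm(u[k, :]) <= T_max / mass,              # thrust magnitude limit
        r[k, 2] >= 0,                                  # never go below the ground
    ]

# Minimize total commanded acceleration, a convex stand-in for fuel use.
objective = cp.Minimize(sum(cp.norm(u[k, :]) for k in range(N)))
problem = cp.Problem(objective, constraints)
problem.solve()
print("status:", problem.status, "| touchdown position:", r.value[N])
```

Because every constraint and the objective are convex, a solver can find the global optimum quickly and predictably, which is precisely the property that makes solving such problems online, during the descent, attractive.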
We suggest that the technology that lands the Falcon 9 booster also be implemented in future Martian missions. This would address our current inability to land precisely on Mars. Figure 3 displays the vast landing range of Martian missions. When characterising landing precision, engineers define a landing ellipse within which the vehicle is expected to touch down. The area of this ellipse has shrunk with each Martian mission and has the potential to become as small as the area of a SpaceX drone ship. The primary reason for the higher uncertainty of Martian landings, as opposed to the Falcon 9 booster stage, is the use of parachutes for landing on Mars.
B. Earth surface Mineral dust source InvesTigation and confocal diffuse tomography
Earth surface Mineral dust source InvesTigation, or EMIT, is a mission designed to locate arid dust-source regions using spectroscopy in visible and short-wave infrared light. The resulting maps are intended to help analyze the role of mineral dust in the radiative forcing (the energy imbalance that drives temperature change) of the atmosphere. Darker mineral dust absorbs sunlight, whereas brighter mineral dust reflects it back out of the atmosphere.
The instrument operates from the International Space Station. It captures visible and infrared light, splits it into a spectrum, and classifies the observed surface based upon what the dust is made of, as each mineral has a distinct spectroscopic signature. With this analyzing and sorting technique, spectroscopy can eventually also be used to detect whether a planet has oxygen or water, since these too have unique spectroscopic signatures and could indicate conditions suitable for extraterrestrial life.
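The classification step can be pictured as matching an observed spectrum against a library of reference signatures. The sketch below is a toy nearest-neighbour version of that idea; the mineral names and reflectance values are invented for illustration and are not EMIT library data, which uses far finer spectral sampling and more sophisticated analysis.

```python
# A toy illustration of classifying a measured spectrum by comparing it against
# reference signatures; all names and values below are made up for the example.
import numpy as np

reference_spectra = {
    "hematite-like":  np.array([0.10, 0.15, 0.30, 0.45, 0.50]),
    "gypsum-like":    np.array([0.55, 0.60, 0.62, 0.58, 0.40]),
    "kaolinite-like": np.array([0.40, 0.42, 0.50, 0.35, 0.30]),
}

def classify(observed):
    """Return the reference whose spectrum is closest to the observation."""
    return min(reference_spectra,
               key=lambda name: np.linalg.norm(observed - reference_spectra[name]))

observed = np.array([0.12, 0.16, 0.28, 0.44, 0.52])   # five illustrative bands
print(classify(observed))                             # -> "hematite-like"
```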
The issue with EMIT is that it has to be scheduled according to atmospheric conditions: it cannot obtain valid data when target regions are concealed by clouds. EMIT schedules therefore have to take into account the cloud distribution and density over the target region prior to analysis.
To see through clouds and fog, researchers at Stanford University came up with a system incorporating a laser and a single-photon detector. In the model, the laser scans an obstacle; the few photons that manage to pass through it reach the object on the other side, scatter back through the obstacle, and enter the detector. Those returning photons are studied and used to reconstruct the object behind the obstacle. In this case, the object referred to is the ground dust. This method is known as confocal diffuse tomography.
Conventional technologies that would solve this issue either operate at a microscopic level or require in-depth prior information about the target. This poses a problem, as it is not very feasible for understanding mineral dust over a very large scale, and the atmosphere can only be predicted to a certain extent. Thus, we suggest that confocal diffuse tomography is a highly suitable method for surface analysis and should be implemented.
C. Exoplanet Detection Algorithm
Exoplanets are hundreds of light years away from Earth and are often located in the midst of discs containing dust displaced by the exoplanet. These discs are extremely thick, making exoplanets harder to detect. NASA's newer telescopes, JWST (2021) and NGRST (2027), provide or aim to provide more efficient data using new technologies. Machine learning redirects researchers' time towards theoretical interpretation instead of arranging and managing data, and can help identify exoplanets from a list of potential candidates with a higher accuracy rate than humans.
We devised an exoplanet detection algorithm, given in Figure 4. The algorithm can work with any suitable dataset, such as those from NASA's website or Kaggle; this particular implementation uses a Kaggle dataset. In the Kepler mission, only the transit method was used to identify exoplanets. In the transit method, a star is observed, and a dip in the star's brightness indicates that an exoplanet is passing in front of it. By measuring the size of the brightness dip, the size of the exoplanet can also be calculated.
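The size estimate follows from simple geometry: the fractional dip in brightness is roughly the ratio of the planet's disc area to the star's disc area, so depth ≈ (Rp/Rs)². The snippet below applies that relation with illustrative numbers.

```python
# The transit-depth relation: a fractional dip of 'depth' in a star of radius
# Rs implies a planet radius of roughly Rs * sqrt(depth). Values are
# illustrative, not mission data.
import math

def planet_radius(star_radius_km, transit_depth):
    """Estimate planet radius from a fractional transit depth."""
    return star_radius_km * math.sqrt(transit_depth)

sun_like_radius_km = 696_000
depth = 0.0001                 # a 0.01% dip
print(planet_radius(sun_like_radius_km, depth), "km")   # ~6,960 km, roughly Earth-sized
```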
This algorithm takes Kepler data into account for analysis. It takes the data set and selects relevant columns for both the train and test divisions: 'a' contains the feature values that must be studied to determine the result, and 'b' contains the result (the label). Then the data are split, with 70% going to the training set and the remaining 30% to the test set. When predictions are calculated, this algorithm gives an accuracy rate of 98%.
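A minimal version of this workflow is sketched below using scikit-learn. The file name, label column and choice of classifier (a random forest) are placeholder assumptions for illustration; the paper's Figure 4 algorithm and the actual Kaggle column names may differ, and the 98% figure refers to the paper's own run, not to this sketch.

```python
# A minimal sketch of the train/test workflow described in the text, assuming a
# Kepler-style table where 'a' holds feature columns and 'b' the label column.
# The file name and column name below are placeholders, not the real schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("kepler_candidates.csv")        # placeholder file name
a = data.drop(columns=["disposition"])             # feature columns (placeholder label name)
b = data["disposition"]                            # label: confirmed vs false positive

# 70% of rows for training, 30% held out for testing, as in the paper.
a_train, a_test, b_train, b_test = train_test_split(a, b, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(a_train, b_train)
predictions = model.predict(a_test)
print("accuracy:", accuracy_score(b_test, predictions))
```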
For AI to be a promising tool, it must detect the weakest and strangest signals. A train-and-test algorithm does not provide 100% accuracy; its job is to detect variations. However, in certain situations new cases emerge, or the dip in light intensity is very minute because of a relatively small exoplanet radius or a relatively bright star. Additionally, the more training data there is, the better the test results. AI can also minimize or eliminate unimportant data, but it can eliminate important data as well. This produces the need for sharper instruments for better data collection and, of course, more efficient algorithms. It also matters how much area a telescope can cover in a certain amount of time and what its bandwidth is. All of this has improved with the introduction of newer telescopes.
We thus suggest highly advanced future telescopes with increased focal length to produce higher magnification. Higher magnification will help in detecting fainter stars and fainter exoplanet signals, but with a reduced FOV. To compensate for the reduced FOV, telescopes must slew at faster speeds and avoid downlinking bulk data, and the onboard planning systems on the telescope itself must be extremely efficient to support this solution.
As for positioning, the L2 Lagrange point is currently the most apt location for a telescope due to limitations at L3, L4 and L5. L3 lies directly behind the Sun, on the opposite side of Earth's orbit, and no known objects are stationed there. L4 and L5, meanwhile, are gravitationally stable, which results in the accumulation of dust and asteroids in those regions.
Figure 5 shows comparisons between NASA’s 4 prominent space telescopes: Hubble Space Telescope, Transiting Exoplanet Survey Satellite or TESS, James Webb Space Telescope or JWST and Nancy Grace Roman Space Telescope or Roman Telescope. The comparison depicts the growth of equipment technology with time.
TABLE I
Comparison of telescopes

| Characteristics | Hubble (1990) | TESS (2018) | JWST (2021) | Roman (2027) |
| --- | --- | --- | --- | --- |
| Field of View (FOV) | 0.001 deg² | 24° × 96° | 2 times that of Hubble | 216 times that of Hubble |
| Orbit | orbit around Earth | highly elliptical orbit around Earth | L2 (1.5 million km from Earth) | L2 (1.5 million km from Earth) |
| Exoplanet detection method | transit | transit | transit | transit, microlensing |
| Wavelength range | 0.2 microns (UV) – 1.7 microns (near infrared) | 0.6 microns (blue) – 1 micron (near infrared) | 0.6 microns (orange) – 5 microns (mid infrared) | 0.5 microns (blue-green) – 2.3 microns (near infrared) |
| Primary mirror size | 2.4 m | 10.5 cm | 6.5 m | 2.4 m |
| Instruments | WFC3, COS, ACS, STIS, NICMOS, FGS | 4 identical cameras, DHU | MIRI, FGS, NIRCam, NIRSpec | Coronagraph Instrument, WFI |
Note that in Table I the purpose of each telescope varies, and thus they have different priorities for instrumentation; however, all are involved in exoplanet detection. Further, focal length is inversely proportional to FOV but directly proportional to magnification.
D. Data classification and calibration pipeline and Morpheus model (JWST)
Instruments such as the JWST (James Webb Space Telescope), HST (Hubble Space Telescope), TESS (Transiting Exoplanet Survey Satellite) and the Kepler space telescope collect data which is processed for researchers, who then identify useful patterns. Space missions generate huge data sets, and ML makes their analysis more efficient.
JWST is a complicated structure with 4 instruments offering 50 filters and 15 modes. Data from JWST passes through a pipeline: NASA's Deep Space Network feeds it down into the processing systems of the Space Telescope Science Institute, which controls JWST operations. No manual interaction is required until the data goes into the archive for research; automated systems do all the calibrations, classifications and compilations. However, images without metadata are useless.
The pipeline processes a huge amount of data. This data must be complete, checked for errors and managed so that the files are archived without error; software engineers must perform a lot of data-completion checking. The data is then stored in files for research, and variations from regular datasets draw attention. Algorithms must be redesigned and modified periodically (by providing the latest data) to accommodate the most efficient means of data calibration and classification and to minimize human participation, especially in data-completion checking, which, needless to say, is a tedious task.
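The data-completion checking described above can be pictured as a simple validation gate before archiving. The sketch below is only an illustration of the idea; the field names are invented placeholders, not actual JWST header keywords.

```python
# A toy illustration of a data-completeness check: before a file is archived,
# verify that required metadata fields are present and non-empty. The field
# names here are invented placeholders, not real JWST keywords.
REQUIRED_FIELDS = {"instrument", "filter", "exposure_time", "target", "observation_date"}

def completeness_issues(metadata):
    """Return the set of required metadata fields missing or empty in this record."""
    return {field for field in REQUIRED_FIELDS
            if field not in metadata or metadata[field] in (None, "")}

record = {"instrument": "NIRCam", "filter": "F200W", "exposure_time": 1074.7,
          "target": None, "observation_date": "2023-01-15"}
missing = completeness_issues(record)
print("archive" if not missing else f"flag for review, missing: {missing}")
```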
Apart from the calibration pipeline, another AI-based, JWST classification-oriented technology is the Morpheus model, an algorithm that uses CNNs (convolutional neural networks) to analyze images. It interprets astronomical objects in full colour and high resolution after they are captured by the James Webb Space Telescope. The algorithm is a product of modifications to a semantic segmentation algorithm that was used to classify astronomical objects in Hubble Space Telescope images based on their shape. The semantic segmentation has been substantially scaled up and is used to associate every pixel with a specific category.
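The core idea of semantic segmentation, assigning a class score to every pixel, can be shown with a very small convolutional network. The sketch below (in PyTorch) is only a toy stand-in for Morpheus, whose real architecture, training data and output classes are far more elaborate.

```python
# A minimal, illustrative per-pixel classifier in PyTorch, shown only to convey
# the idea of semantic segmentation that Morpheus builds on; it is not the
# Morpheus architecture, and the class count below is an assumption.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self, in_bands=4, num_classes=5):
        super().__init__()
        # Convolutions that preserve spatial size, so every input pixel
        # receives an output class score.
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)          # shape: (batch, num_classes, H, W)

# Example: one 4-band 64x64 image cutout.
model = TinySegmenter()
image = torch.randn(1, 4, 64, 64)
scores = model(image)
labels = scores.argmax(dim=1)       # per-pixel class index, shape (1, 64, 64)
print(labels.shape)
```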
E. Autonomous Mission Operations in Martian Rovers (Curiosity and Perseverance)
Using AI in Martian rovers' mission operations promotes efficiency. Autonomy enables operations that can't be run directly by ground control because of the communication delay and limited bandwidth. Automation has increased the capabilities of the Mars rover missions, helping them navigate and collect data on their own.
1. Curiosity
a. Curiosity's wheels
Earlier, AI on Curiosity wasn't used for one of the main operations a rover performs: self-driving. Although automated driving capabilities were present, they weren't put to use because the rover wouldn't steer away from small, sharp rocks, which caused significant damage to the wheels. While climbing over a rock, some wheels lift off the surface and spin in the air, increasing the pressure exerted on the wheels that are still touching the ground. Moreover, on uneven landscapes, the clearance between the wheels and the terrain is at risk.
The traction-control algorithm adjusts each wheel's speed, reducing the pressure put on the wheels by surface obstructions. The algorithm uses real-time data to track changes and estimate the contact points between the wheels and the surface. Further, it calculates the speed needed to keep the rover from slipping. The traction-control software is turned on by default but can be turned off when needed.
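The principle can be illustrated with a toy calculation: if a wheel's effective rolling radius changes as it climbs an obstacle, commanding per-wheel speeds that all correspond to the same rover ground speed reduces the fighting between wheels. The sketch below is an invented illustration, not JPL's traction-control code.

```python
# A toy illustration (not JPL's actual traction-control software) of the idea:
# estimate each wheel's effective rolling radius, then command a per-wheel
# angular velocity so every wheel targets the same rover ground speed.
from dataclasses import dataclass

@dataclass
class WheelState:
    nominal_radius_m: float      # wheel radius on flat ground
    contact_offset_m: float      # estimated change in effective radius over an obstacle

def wheel_speed_commands(ground_speed_mps, wheels):
    """Return per-wheel angular velocity commands (rad/s)."""
    commands = []
    for w in wheels:
        effective_radius = max(w.nominal_radius_m + w.contact_offset_m, 0.05)
        commands.append(ground_speed_mps / effective_radius)
    return commands

# Example: six wheels, one momentarily riding up a rock (larger effective radius).
wheels = [WheelState(0.25, 0.0)] * 5 + [WheelState(0.25, 0.04)]
print(wheel_speed_commands(0.04, wheels))    # a slow, Curiosity-like drive speed
```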
A major issue faced by all rovers is driving on sand, which is particularly tough because rover wheels tend to sink into it. To address this, we suggest a pressure-adjustment system. The aim of the system would be to reduce tyre pressure whenever the rover comes into contact with sand on the Martian surface. This would increase the contact area between the rover's wheels and the surface, preventing it from slipping and reducing the damage faced by the wheels. The rover would also need a pressure-refill system to obtain gas from Mars' atmosphere and divert those reserves back to the wheels when they return to firmer ground. For this system to work, the rover is expected to effectively detect the presence of sand.
b. AEGIS
The Curiosity rover used to take images that were relayed back to Earth; after analyzing them, scientists picked suitable target locations. This system of bulk downlinking was not very efficient and was replaced by AEGIS (Autonomous Exploration for Gathering Increased Science). AEGIS selects targets automatically based on predefined properties and shoots them with a laser to analyze them in depth and determine the composition and properties of rocks and soils. NASA scientists took inspiration for this from Curiosity's predecessor, Opportunity. AEGIS works like a rock detector: using this software, Curiosity can scan unexplored sites to determine shooting targets that have the potential to provide promising results, instead of shooting randomly with the ChemCam. Target prioritization is the main component of AEGIS and is executed using image analysis from the NavCam. The NavCam consists of 2 pairs of black-and-white cameras, each with a FOV of 45°, that operate in the visible-light range. One issue with the operation of AEGIS was that the NavCam wouldn't suffice for extremely small targets such as veins in a rock, so the AEGIS system also performs image analysis on images obtained by the ChemCam's Remote Micro-Imager (RMI).
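Target prioritization can be pictured as scoring each candidate found in an image against predefined preferences and picking the best. The sketch below is a toy version with invented weights and properties, not the actual AEGIS scoring criteria.

```python
# A toy sketch of AEGIS-style target prioritization: candidates found in a
# NavCam image are scored against predefined preferences (made-up weights on
# size and brightness here) and the best-ranked one is selected.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    size_px: int        # apparent size in the image
    brightness: float   # mean intensity, 0 to 1

def score(target, size_weight=0.6, brightness_weight=0.4, preferred_size=40):
    """Higher is better: favour targets near a preferred size, and brighter ones."""
    size_term = 1.0 / (1.0 + abs(target.size_px - preferred_size))
    return size_weight * size_term + brightness_weight * target.brightness

candidates = [Target("rock_a", 12, 0.7), Target("rock_b", 38, 0.5), Target("vein_c", 3, 0.9)]
best = max(candidates, key=score)
print("selected target:", best.name)
```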
Another problem with AEGIS was its integration. AEGIS was originally designed for the system of the Opportunity Rover which was less complicated than that of the Curiosity Rover. However, with continuous efforts, it was successfully integrated.
2. Perseverance
a. Navigation
The Perseverance rover uses AI to reach locations as fast as possible, minimizing science activities to speed up travel when required. Even so, its speed is much slower than it would be if humans were driving the rover on-site. The rover has an AutoNav system, granting it the ability to capture and process images and move accordingly. Martian rovers may not be the fastest, but they are very good at detecting geometric hazards such as rocks and pits using a 3D terrain map built with computer vision.
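Geometric hazard detection can be illustrated with a height-map check: any cell whose height differs too sharply from a neighbour is flagged as a potential hazard. The sketch below is a toy version with invented thresholds and terrain, not AutoNav's actual terrain assessment.

```python
# A toy sketch of geometric hazard detection on a terrain height map: grid
# cells whose height differs from a neighbour by more than a step threshold
# are flagged. Threshold and terrain values are illustrative assumptions.
import numpy as np

def hazard_map(heights_m, max_step_m=0.25):
    """Flag cells whose height differs from any neighbour by more than max_step_m."""
    hazards = np.zeros_like(heights_m, dtype=bool)
    for axis in (0, 1):
        step = np.abs(np.diff(heights_m, axis=axis)) > max_step_m
        if axis == 0:
            hazards[:-1, :] |= step
            hazards[1:, :] |= step
        else:
            hazards[:, :-1] |= step
            hazards[:, 1:] |= step
    return hazards

terrain = np.zeros((5, 5))
terrain[2, 3] = 0.4             # a 40 cm rock in an otherwise flat patch
print(hazard_map(terrain).astype(int))
```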
Additionally, there is a common misconception that autonomous operations and ground-control operations are completely separate. They are actually quite integrated, since AI isn't developed enough yet: ground control still has to send commands to the rover to make it perform tasks, so even the autonomous capabilities are ground-control oriented. One of the main aims of advancing AI is to increase the rover's efficiency in independent decision making so as to approach complete autonomy. The long-term vision is to create a Google Maps-like structure for Mars.
As for speed, NASA's lunar rovers were 100 times faster than today's Martian rovers; however, they were entirely operated by humans. Martian rovers move slowly to prevent damage to their structures. Speed can only be improved when the efficiency of autonomous detection of potential hazards increases. These potential hazards also include unfavourable surfaces such as sand. Tools such as MobSketch, the Mars 2020 mobility planning tool, have been created to help with this.
b. Onboard Planning
Onboard planning is a key feature of the rover, essential for making efficient use of the rover's current conditions. It refers to the decisions the rover has to take and the tasks it has to perform based on its circumstances. If a new event occurs in its surroundings and is perceived as useful for scientific pursuits, the rover should be able to act on it. For instance, if an object such as a rock exhibits unusual motion and the rover finds that to be valuable, it should be able to decide to analyze the rock further.
Parallel energy distribution and prioritization while performing activities simultaneously is another example of onboard planning. The rover schedules activities using squeaky wheel optimization: a candidate solution is created and analyzed, tasks are re-prioritized based on the results of that analysis, and the cycle continues until an appropriate solution is found. Additionally, certain activities have specific requirements for their execution, so the rover ensures that the inputs required for each activity are available at the time for which the activity is scheduled. The onboard planner ensures that no constraints are violated, because with limited resources, activities cannot be shifted or removed after being scheduled. Since there is a wide variety of ways in which the tasks can be scheduled, a prototype GUI has been developed to help determine the order of execution.
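The squeaky wheel loop can be sketched in a few lines: build a schedule greedily from the current priorities, identify the activities that were left out, raise their priority, and rebuild. The tasks, energy costs and budget below are invented for illustration and are not rover activity data.

```python
# A toy sketch of squeaky-wheel-style scheduling: schedule greedily from a
# priority order, find tasks that did not fit the energy budget (the "squeaky
# wheels"), raise their priority, and repeat, keeping the best schedule seen.
def greedy_schedule(tasks, priority, energy_budget):
    """Schedule tasks in priority order until the energy budget runs out."""
    scheduled, remaining = [], energy_budget
    for name in sorted(tasks, key=lambda n: -priority[n]):
        if tasks[name] <= remaining:
            scheduled.append(name)
            remaining -= tasks[name]
    return scheduled

def squeaky_wheel(tasks, energy_budget, iterations=5):
    priority = {name: 1.0 for name in tasks}
    best = []
    for _ in range(iterations):
        schedule = greedy_schedule(tasks, priority, energy_budget)
        if len(schedule) > len(best):
            best = schedule
        for name in tasks:
            if name not in schedule:
                priority[name] += 1.0     # unscheduled tasks "squeak" and gain priority
    return best

tasks = {"drive": 40, "drill": 30, "image_panorama": 20, "chemcam_shot": 15}  # energy units
print(squeaky_wheel(tasks, energy_budget=60))
```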
III. CONCLUSION
Upon studying the role of AI in 5 spheres of space exploration, we can conclude that Falcon 9's convex optimization algorithm should be implemented in missions to Mars. Moreover, the efficiency of Falcon 9's INS and GPS systems can always be increased. This ties into better data collection, an essential for exoplanet detection. For exoplanets, this will come with improved instrumentation, while for the Falcon 9 it will come with improved satellite receivers, a form of improved instrumentation. However, proper analysis of the collected data is as important as proper data collection. Algorithms including JWST's data calibration pipeline and the Morpheus model must always be updated and refined to accommodate fresh data in order to ensure the highest-quality analysis. As for EMIT, although difficult to implement on a large scale, confocal diffuse tomography is an apt method because it can operate without prior atmospheric analysis. Lastly, AI in Martian rovers is arguably one of its most important applications in space exploration. The goal must be to make future rovers advanced enough to perform not only collection but also thorough onboard analysis of scientific material without any human intervention. Moreover, with the long-term goal of terraforming Mars, most operations like these will have to rely on ML. Thus, we can conclude that strengthening and expanding AI applications in space has a lot of prospects, which need to be worked on efficiently and with innovation.
Copyright © 2023 Prisha Khichadi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET52267
Publish Date : 2023-05-14
ISSN : 2321-9653
Publisher Name : IJRASET